Kagalkar, Ramesh M.
- Automatic Graph Based Clustering for Image Searching and Retrieval from Database
Authors
1 Dept. of Computer Engineering, Dr. D. Y. Patil School of Engineering and Technology Lohegaon, Pune, IN
2 Dept. of Computer Engineering, R. H. Sapat College of Engineering, Nashik, IN
Source
Software Engineering, Vol 8, No 2 (2016), Pagination: 39-49
Abstract
Content-based image retrieval and searching is one of the most active problems in multimedia computing. Human perception is not understood well enough to fully automate the retrieval process. In this work we have designed a system for content-based image searching that uses multiple cues (features) for image searching and retrieval. Since most features have drawbacks, we use cues that are robust to geometrical transforms and viewpoint variation, and we present results based on these cues. A heuristic for combining the results of different cues to increase the accuracy of the system is developed. Databases of different sizes were used to estimate the accuracy of the system. Global shape descriptors of images and object-based descriptors are extracted for the retrieval of images. Multimedia databases are very large, so exhaustive searching of images in these databases is impractical. For this purpose an automatic graph-based clustering algorithm is developed to reduce the time needed to search for images in the database. The proposed algorithm works on the concept of a minimum spanning tree, removing inconsistent edges from the tree based on a dynamic threshold provided to the algorithm. The proposed algorithm reduces the search time for retrieval without much loss in accuracy. We found that careful combination of the different cues, based on our proposed heuristic, can increase retrieval accuracy to a noticeable extent.
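The clustering step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes toy 2-D feature vectors, Euclidean distances, and a dynamic threshold taken as a multiple of the mean MST edge weight (the abstract does not fix these choices).

```python
import math

def mst_edges(points):
    """Prim's algorithm: return MST edges as (weight, i, j) tuples."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        w, i, j = min((dist(i, j), i, j)
                      for i in in_tree for j in range(n) if j not in in_tree)
        edges.append((w, i, j))
        in_tree.add(j)
    return edges

def mst_clusters(points, factor=1.5):
    """Cut MST edges heavier than factor * mean edge weight; return clusters."""
    edges = mst_edges(points)
    threshold = factor * sum(w for w, _, _ in edges) / len(edges)
    # union-find over the edges that survive the cut
    parent = list(range(len(points)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for w, i, j in edges:
        if w <= threshold:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# two well-separated groups of 2-D "feature vectors"
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(mst_clusters(pts))   # → [[0, 1, 2], [3, 4, 5]]
```

The single long edge bridging the two groups exceeds 1.5 times the mean edge weight and is cut as "inconsistent", so the six images fall into two clusters; at query time only the closest cluster needs to be searched exhaustively.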
Keywords
Content-Based Image Retrieval (CBIR), Colour Coherence Vectors (CCV) and Query by Image Content (QBIC).
- Self-Educating Tool Kit for Kids
Authors
1 Dept of Computer Engineering, Dr. DYPTCET, Lohegaon, Pune, Maharashtra, IN
Source
Software Engineering, Vol 8, No 1 (2016), Pagination: 11-16
Abstract
In this paper, we provide an overview of an interactive self-learning tool that we have developed for children aged 1 to 5 years in India. By developing this software, we aim to promote self-learning and to build information and communication technology skills among children in India. The learning tool is based primarily on the science curricula covered at the upper primary and lower secondary levels in Indian schools. Our software does not intend to render obsolete or replace the prevailing pedagogical approaches; rather, it will complement existing teaching and learning methods.
Keywords
E-Learning, Information and Communication Technology.
- A Review Paper on Predictive Sound Recognition System
Authors
1 Department of Computer Engineering, Dr. D. Y. Patil School of Engineering and Technology, Lohegaon, Pune, IN
Source
Software Engineering, Vol 7, No 6 (2015), Pagination: 159-163
Abstract
The proposed research objective is to develop a framework for the automatic recognition of sound. In this framework the main task is to take any input audio stream, analyse it, and predict the likelihood of the different sounds that appear in it, and thereby to build and commercially deploy a flexible audio search engine. The algorithm is noise- and distortion-resistant, computationally efficient, and massively scalable: it is capable of quickly identifying a short segment of an audio stream captured through a phone microphone, in the presence of foreground voices and other dominant noise and through voice-codec compression, out of a database of available tracks. The algorithm uses a combinatorial hashed time-frequency constellation analysis of the audio, yielding unusual properties such as transparency, in which several tracks mixed together may each be identified.
Keywords
Fingerprinting, Pure Tone, White Noise.
- Review Paper: Detail Study of Sign Language Recognition Techniques
Authors
1 VTU, Belgaum, Karnataka, IN
2 Department of Computer Engg, R. H. Sapat College of Engineering, Nashik, Pune, Maharashtra, IN
Source
Digital Image Processing, Vol 8, No 3 (2016), Pagination: 65-69
Abstract
This paper reviews the state of the art in automatic recognition of continuous signs from different languages, organised by the data sets used, features computed, techniques applied, and recognition rates achieved. We find that, in the past, most work has been done on finger-spelled words and isolated sign recognition, but recently there has been significant progress in the recognition of signs embedded in short continuous sentences. We also find that researchers are beginning to address the important problem of extracting and integrating the non-manual information present in face and head movement, and we present results from experiments on the integration of non-manual features.
Keywords
American Sign Language (ASL), Hidden Markov Model (HMM) and Extended Multi Modal Annotation (EMMA).
- Methodology for Translation of Sign Language into Textual Version in Marathi
Authors
1 Department of Computer Engineering, Savitribai Phule University of Pune, IN
Source
Digital Image Processing, Vol 7, No 8 (2015), Pagination: 225-229
Abstract
In recent years sign language recognition has become one of the fastest-growing fields of research, and signing is the most natural mode of communication for people with hearing impairments. A hand gesture recognition system can give deaf persons the opportunity to communicate with hearing people without the need for an interpreter or intermediary. We are going to construct a framework and techniques for the automatic recognition of Marathi sign language, through which we provide teaching classes for the purpose of training deaf signers in Marathi. The framework does require the hand to be properly aligned to the camera, but it does not require any wearable sensors. A large set of tests has been used in the proposed framework to recognise isolated words from standard Marathi sign language, captured in front of the camera with different deaf signers. In our proposed framework, we aim to recognise some very basic components of sign language, to translate them to text, and vice versa. The proposed framework uses 46 Marathi letters of the alphabet for recognition.
Keywords
Marathi Sign Language, Hand Gesture Recognition, Canny’s Edge Detection, Processing, Feature Extraction, Pattern Recognition/Matching, Gray Scale Image, Database.
- Methodologies for Tumor Detection Algorithm as Suspicious Region from Mammogram Images Using SVM Classifier Technique
Authors
1 Department of Computer Science, Sri Siddhartha Institute of Technology, Tumkur, Karnataka, IN
2 Department of Computer Science, Rural Engineering College, Hulkoti, Karnataka, IN
3 Department of Electronics & Communication Engineering, Sri Siddhartha Institute of Technology, Tumkur, Karnataka, IN
Source
Digital Image Processing, Vol 3, No 19 (2011), Pagination: 1202-1207
Abstract
This paper presents a tumor detection algorithm for mammograms. The proposed system focuses on the solution of two problems: how to detect tumors as suspicious regions with very weak contrast to their background, and how to extract features that categorize tumors. The tumor detection method follows the scheme of mammogram enhancement, segmentation of the tumor area, extraction of features from the segmented tumor area, and use of an SVM classifier. Enhancement can be defined as conversion of the image quality to a better and more understandable level; the mammogram enhancement procedure includes filtering, the top-hat operation, and the DWT, after which contrast stretching is used to increase the contrast of the image. Segmentation of mammogram images plays an important role in improving the detection and diagnosis of breast cancer; the most common segmentation method used is thresholding. Features are extracted from the segmented breast area, and the final stage classifies the regions using the SVM classifier. The method was tested on 75 mammographic images from the mini-MIAS database and achieved a sensitivity of 88.75%.
Keywords
Computer Aided Diagnosis (CAD), Computed Tomography (CT), Support Vector Machine (SVM) and Discrete Wavelet Transform (DWT).
- Hindi Language Document Summarization Using Context Based Indexing Model
Authors
1 Department of Computer Engineering, Dr. D. Y. Patil School of Engineering and Technology, Lohegaon, Pune, IN
2 Department of Computer Engineering, Dr. D. Y. Patil School of Engineering and Technology, Charoli, B. K. Via, Lohegaon, Pune, Maharashtra, IN
Source
Data Mining and Knowledge Engineering, Vol 8, No 1 (2016), Pagination: 1-6
Abstract
Hindi Document Summarization (DS) is an Information Retrieval (IR) process in which a summary of a document is extracted to provide an overview of that document. Existing document summarization models generally use the similarity among sentences in the original document to extract the most significant sentences. The documents and their sentences are generally indexed using standard term-indexing techniques, which do not take the context of the document into account; the similarity values between sentences are therefore independent of context. In this paper, a context-sensitive document indexing model is proposed for Hindi text documents, based on the Bernoulli model of randomness. The Bernoulli model is used to estimate the probability of the co-occurrence of two terms in a large set of documents.
Keywords
Document Summarization, Lexical Association, Context Indexing.
- New Framework for Translation of Sign Language Action into Text Description in Kannada
Authors
1 Department of Computer Engineering, Dr. D Y Patil School of Engineering and Technology, Pune, IN
2 Department of Computer Engineering, R. H. Sapat College of Engineering, Nashik, Maharashtra, IN
Source
Digital Image Processing, Vol 8, No 10 (2016), Pagination: 315-319
Abstract
In recent years sign language recognition has become one of the fastest-growing fields of research, and signing is the most natural mode of communication for people with hearing impairments. A hand gesture recognition system can give deaf persons the opportunity to communicate with hearing people without the need for an interpreter or intermediary. The proposed system constructs a framework and techniques for the automatic recognition of Kannada sign language, through which we provide teaching classes for the purpose of training deaf signers in Kannada. The framework does require the hand to be properly aligned to the camera, but it does not require any wearable sensors. A large set of tests has been used in the proposed framework to recognise isolated words from standard Kannada sign language, captured in front of the camera with different deaf signers. In the proposed framework, we aim to recognise some very basic components of sign language, to translate them to text, and vice versa. The proposed framework uses 36 Kannada letters of the alphabet for recognition.
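As a rough illustration of the grayscale-and-matching stages of such a pipeline (a sketch of ours, not the authors' implementation), the code below thresholds a grayscale frame into a binary hand mask and picks the stored letter template with the highest pixel-agreement score. The 4x4 templates and the Kannada labels are hypothetical placeholders.

```python
def to_binary(gray, threshold=128):
    """Threshold a grayscale image (list of rows of 0-255 values) to a 0/1 mask."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray]

def match_score(mask, template):
    """Fraction of pixels on which mask and template agree."""
    total = len(mask) * len(mask[0])
    agree = sum(m == t for mr, tr in zip(mask, template)
                for m, t in zip(mr, tr))
    return agree / total

def recognise(gray, templates):
    """Return the label of the stored mask that best matches the input frame."""
    mask = to_binary(gray)
    return max(templates, key=lambda label: match_score(mask, templates[label]))

# hypothetical 4x4 stored masks for two Kannada letters
templates = {
    "ಅ": [[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]],
    "ಆ": [[1, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 1, 1, 1]],
}
frame = [[0, 200, 210, 0], [190, 0, 0, 220], [200, 0, 0, 180], [0, 255, 230, 0]]
print(recognise(frame, templates))   # → ಅ
```

A real system would first segment the hand (the paper's keywords mention Canny edge detection) and normalise its position and scale before any such matching step.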
Keywords
Kannada Sign Language, Hand Gesture Recognition, Canny’s Edge Detection, Processing, Feature Extraction, Pattern Recognition/Matching, Gray Scale Image, Database.
- Survey on Real Time Hand Gesture Recognition Techniques
Authors
1 Department of Computer Sci. & Engineering, Dr. DY Patil School of Eng. and Technology, Pune, IN
Source
Biometrics and Bioinformatics, Vol 9, No 4 (2017), Pagination: 65-69
Abstract
In today’s busy world, gestures play an important role in the everyday life of human beings as a way to convey emotions and information, and gesture recognition is therefore a necessary part of Human Computer Interaction (HCI). In recent years HCI has become an attractive field for researchers. Hardware devices like the keyboard, mouse and joystick can be replaced with a compatible touch-less environment for computer interaction. There are many issues in video processing: the task becomes difficult when shadows, moving objects and changing lighting are present in the video. The approach of this paper is to analyse the methods and techniques used for video processing and gesture recognition by different researchers. This paper reviews dynamic hand gesture recognition for video processing.
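One common baseline for the moving-object problem mentioned above is simple frame differencing. The sketch below is an illustrative assumption of ours, not a technique attributed to any surveyed paper: it flags pixels whose grayscale intensity changes by more than a threshold between consecutive frames, which is exactly the step that shadows and lighting changes corrupt.

```python
def frame_diff(prev, curr, threshold=30):
    """Mark pixels whose intensity changed by more than threshold between frames."""
    return [[1 if abs(c - p) > threshold else 0 for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

def motion_ratio(prev, curr, threshold=30):
    """Fraction of pixels in motion between two grayscale frames."""
    mask = frame_diff(prev, curr, threshold)
    return sum(map(sum, mask)) / (len(mask) * len(mask[0]))

# a moving hand enters the lower-right corner of a 3x3 toy frame
prev = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
curr = [[10, 10, 10], [10, 200, 200], [10, 200, 200]]
print(motion_ratio(prev, curr))   # 4 of the 9 pixels changed
```

A gradual lighting change raises every pixel a little, so a per-pixel threshold like this mislabels the whole frame as motion; that fragility is why the surveyed work moves to more robust features and learned classifiers.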
Keywords
Video Processing, Image Processing, Hand Gesture Recognition, Machine Learning, Support Vector Machine.
- Support Vector Machine Based Approach for Translating Video Sceneries to Natural Language Descriptions
Authors
1 Department of Computer Engineering, Dr. D Y Patil School of Engineering and Technology, IN
Source
ICTACT Journal on Image and Video Processing, Vol 7, No 4 (2017), Pagination: 1482-1488
Abstract
Humans use language, whether written, spoken or typed, to describe the visual world around them, so interest in generating text descriptions for video keeps increasing. This paper presents a framework that outputs a description for any video up to 50 seconds long by using natural language processing. The framework is divided into two sections, training and testing. The training section is used to train on a video together with its description, such as the activities of the objects present in that video; the trained data is stored in the database along with features of the video's scenario. The testing section is used to test a video and retrieve its description as output. Using natural language processing, sentences are generated from the objects and their activities present in the video.
Keywords
Natural Language Processing, Video Processing, Video Recognition.
References
- G. Kulkarni, V. Premraj, V. Ordonez, S. Dhar, Siming Li, Y. Choi, A.C. Berg and Tamara L. Berg, “BabyTalk: Understanding and Generating Simple Image Descriptions”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 35, No. 12, pp. 2891-2903, 2013.
- N. Krishnamoorthy, G. Malkarnenkar, R. Mooney, K. Saenko and S. Guadarrama, “Generating Natural-Language Video Descriptions using Text-Mined Knowledge”, Proceedings of 27th Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence, pp. 541-547, 2013.
- Andrei Barbu et al., “Video in Sentences Out”, Proceedings of 28th Conference on Uncertainty in Artificial Intelligence, pp. 102-112, 2012.
- Marcus Rohrbach, Wei Qiu, Ivan Titov, Stefan Thater, Manfred Pinkal and Bernt Schiele, “Translating Video Content to Natural Language Descriptions”, Proceedings of IEEE International Conference on Computer Vision, pp. 433-440, 2013.
- S. Gupta and R.J. Mooney, “Using Closed Captions as Supervision for Video Activity Recognition”, Proceedings of 24th Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence, pp. 1083-1088, 2010.
- Chih-Chung Chang and Chih-Jen Lin, “Libsvm: A Library for Support Vector Machines”, ACM Transactions on Intelligent Systems and Technology, Vol. 2, No. 3, pp. 1-27, 2011.
- Marie-Catherine de Marneffe, Bill MacCartney and Christopher D. Manning, “Generating Typed Dependency Parses from Phrase Structure Parses”, Proceedings of the International Conference on Language Resources and Evaluation, Vol. 6, pp. 449-454, 2006.
- Duo Ding et al., “Beyond Audio and Video Retrieval: Towards Multimedia Summarization”, Proceedings of the 2nd ACM International Conference on Multimedia Retrieval, pp. 1-8, 2012.
- Ali Farhadi and Mohsen Hejrati et al., “Every Picture Tells A Story: Generating Sentences from Images”, Proceedings of European Conference on Computer Vision, pp. 15-29, 2010.
- P. Felzenszwalb, D. McAllester and D. Ramanan, “A Discriminatively Trained, Multiscale, Deformable Part Model”, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-8, 2008.
- Muhammad Usman Ghani Khan and Yoshihiko Gotoh, “Describing Video Contents in Natural Language”, Proceedings of the Workshop on Innovative Hybrid Approaches to the Processing of Textual Data, pp. 27-35, 2012.
- Mrunmayee Patil and Ramesh Kagalkar, “An Automatic Approach for Translating Simple Images into Text Descriptions and Speech for Visually Impaired People”, International Journal of Computer Applications, Vol. 118, No. 3, pp. 14-19, 2015.
- Ivan Laptev and Patrick Perez, “Retrieving Actions in Movies”, Proceedings of the 11th IEEE International Conference on Computer Vision, pp. 1-8, 2007.
- Ivan Laptev, Marcin Marszalek, Cordelia Schmid and Benjamin Rozenfeld, “Learning Realistic Human Actions from Movies”, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-8, 2008.
- Mun Wai Lee, Asaad Hakeem, Niels Haering and Song-Chun Zhu, “Save: A Framework for Semantic Annotation of Visual Events”, Proceedings of IEEE Computer Vision and Pattern Recognition Workshops, pp. 1-8, 2008.
- Siming Li, Girish Kulkarni, Tamara L. Berg, Alexander C. Berg and Yejin Choi, “Composing Simple Image Descriptions Using Web-Scale N-Grams”, Proceedings of 15th Conference on Computational Natural Language Learning Association for Computational Linguistics, pp. 220-228, 2011.
- Yuri Lin, Jean-Baptiste Michel, Erez Lieberman Aiden, Jon Orwant, Will Brockman and Slav Petrov, “Syntactic Annotations for the Google Books Ngram Corpus”, Proceedings of 50th Annual Meeting of the Association for Computational Linguistics System Demonstrations, pp. 169-174, 2012.
- Tanvi S. Motwani and Raymond J. Mooney, “Improving Video Activity Recognition using Object Recognition and Text Mining”, Proceedings of 20th European Conference on Artificial Intelligence, pp. 600-605, 2012.
- Ben Packer, Kate Saenko and Daphne Koller, “A Combined Pose, Object, and Feature Model for Action Understanding”, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1378-1385, 2012.
- Kishore K. Reddy and Mubarak Shah, “Recognizing 50 Human Action Categories of Web Videos”, Machine Vision and Applications, Vol. 24, No. 5, pp. 971-981, 2013.
- Heng Wang, Alexander Klaser, Cordelia Schmid and Cheng-Lin Liu, “Action Recognition by Dense Trajectories”, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 3169-3176, 2011.
- Yezhou Yang, Ching Lik Teo, Hal Daumé III and Yiannis Aloimonos, “Corpus-Guided Sentence Generation of Natural Images”, Proceedings of Conference on Empirical Methods in Natural Language Processing, pp. 444-454, 2011.
- Bangpeng Yao and Li Fei-Fei, “Modeling Mutual Context of Object and Human Pose in Human-Object Interaction Activities”, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 17-24, 2010.
- Mrunmayee Patil and Ramesh Kagalkar, “A Review On Conversion of Image To Text as Well as Speech using Edge Detection and Image Segmentation”, International Journal of Science and Research, Vol. 3, No. 11, pp. 2164-2167, 2014.